Massively parallel neural computation

Author

  • Paul James Fox
Abstract

Reverse-engineering the brain is one of the US National Academy of Engineering’s “Grand Challenges.” The structure of the brain can be examined at many different levels, spanning many disciplines from low-level biology through psychology and computer science. This thesis focuses on real-time computation of large neural networks using the Izhikevich spiking neuron model. Neural computation has been described as “embarrassingly parallel”, as each neuron can be thought of as an independent system whose behaviour is described by a mathematical model. However, the real challenge lies in modelling neural communication. While the connectivity of neurons has some parallels with that of electrical systems, its high fan-out results in massive data processing and communication requirements when modelling neural communication, particularly for real-time computation. It is shown that memory bandwidth is the most significant constraint on the scale of real-time neural computation, followed by communication bandwidth, which leads to the decision to implement a neural computation system on a platform based on a network of Field Programmable Gate Arrays (FPGAs), using commercial off-the-shelf components with some custom supporting infrastructure. This brings implementation challenges, particularly the lack of on-chip memory, but also many advantages, particularly high-speed transceivers. An algorithm to model neural communication that makes efficient use of memory and communication resources is developed and then used to implement a neural computation system on the multi-FPGA platform. Finding suitable benchmark neural networks for a massively parallel neural computation system proves to be a challenge. A synthetic benchmark with biologically plausible fan-out, spike frequency and spike volume is proposed and used to evaluate the system, which is shown to be capable of computing the activity of a network of 256k Izhikevich spiking neurons with a fan-out of 1k in real time on a network of 4 FPGA boards. This compares favourably with previous work, with the added advantage of scalability to larger neural networks using more FPGAs. It is concluded that communication must be treated as a first-class design constraint when implementing massively parallel neural computation systems.
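As context for the abstract above, the following is a minimal sketch (not drawn from the thesis itself) of the Izhikevich neuron update it refers to, using the standard regular-spiking parameter values from Izhikevich's 2003 paper; the type and function names are illustrative only.

```c
#include <stdio.h>

/* Minimal sketch of one update step of the Izhikevich spiking neuron model
 * (Izhikevich, 2003):
 *   v' = 0.04 v^2 + 5 v + 140 - u + I
 *   u' = a (b v - u)
 *   if v >= 30 mV:  v <- c,  u <- u + d   (spike and reset)
 * A single 1 ms forward-Euler step is used here for brevity; Izhikevich's
 * reference code integrates v in two 0.5 ms half-steps for stability.
 */
typedef struct { float v, u, a, b, c, d; } neuron_t;

/* Advance one neuron by 1 ms with input current I; return 1 if it fired. */
static int izhikevich_step(neuron_t *n, float I)
{
    n->v += 0.04f * n->v * n->v + 5.0f * n->v + 140.0f - n->u + I;
    n->u += n->a * (n->b * n->v - n->u);
    if (n->v >= 30.0f) {
        n->v = n->c;      /* reset membrane potential   */
        n->u += n->d;     /* bump the recovery variable */
        return 1;
    }
    return 0;
}

int main(void)
{
    /* Regular-spiking parameters; v starts at rest, u = b * v. */
    neuron_t n = { .v = -65.0f, .u = -13.0f,
                   .a = 0.02f,  .b = 0.2f, .c = -65.0f, .d = 8.0f };

    for (int t = 0; t < 1000; t++)       /* 1 s of simulated time  */
        if (izhikevich_step(&n, 10.0f))  /* constant input current */
            printf("spike at t = %d ms\n", t);
    return 0;
}
```

The per-neuron arithmetic above is cheap; the abstract's point is that fan-out dominates. As a rough, illustrative estimate using assumed figures rather than numbers from the thesis: 256k neurons with a fan-out of 1k give roughly 256M synaptic connections, and at an assumed mean firing rate of 10 Hz that is about 2.5 billion synaptic events per second, each requiring at least a weight fetch, i.e. on the order of 10 GB/s of memory traffic at 4 bytes per synaptic word, which is why memory bandwidth emerges as the limiting resource.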

Similar references

Neural Network Implementation in SAS

The estimation or training methods in the neural network literature are usually some simple form of gradient descent algorithm suitable for implementation in hardware using massively parallel computations. For ordinary computers that are not massively parallel, optimization algorithms such as those in several SAS procedures are usually far more efficient. This talk shows how to fit neural netwo...

Transputer Based Massively Parallel Architecture with Computation Intensive Applications 1

In this paper we present a massively parallel reconfigurable system architecture, POPA (POstech PArallel computer), with computation intensive applications. It uses a reconfigurable interconnection network which can be dynamically controlled by software, providing the users with great flexibility to choose the most efficient interconnection topology for their application. It consists of 64 processing...

Parallel and robust skeletonization built on self-organizing elements

A massively parallel neural architecture is suggested for the approximate computation of the skeleton of a planar shape. Numerical examples demonstrate the robustness of the method. The architecture is constructed from self-organizing elements that allow the extension of the concept of skeletonization to areas remote to image processing.

Parallel neural hardware: the time is right

It seems obvious that the massively parallel computations inherent in artificial neural networks (ANNs) can only be realized by massively parallel hardware. However, the vast majority of the many ANN applications simulate their ANNs on sequential computers which, in turn, are not resource-efficient. The increasing availability of parallel standard hardware such as FPGAs, graphics processors, an...

Integrating Parallel Computations and Sequential Symbol Processing in Neural Networks

The paper presents an analysis of computational processes in neural networks, underlying sequential symbol processing. The main problem addressed in this analysis is that computations in neural networks are massively parallel whereas symbol processing is sequential. It is suggested that the two kinds of processes can be reconciled with each other by the idea of causally constructed representati...

Publication date: 2013